Do testers need programming skills?

The debate over whether testers need at least to understand programming concepts is still raging within the discipline. To me this debate is puzzling, because it seems to suggest that, as a professional, I don't really have to understand or be completely proficient in critical aspects of my trade. Even Cem Kaner noted, "I think that the next generation of testers will have to have programming skills." Actually, there was a time not so long ago when testers were expected to have programming skills, so it is nice that Cem now acknowledges that skill as useful in testing.

Unfortunately, occasionally even within Microsoft a few people still want to differentiate between STE and SDET by blindly assuming that STE meant non-programming tester. The fact is that the old STE ladder-level guidelines clearly listed skills such as debugging production code and designing and developing effective automation as required skills for Microsoft testers. Unfortunately, some managers chose to selectively ignore these skill requirements, and some groups chose to differentiate between GUI testers and any tester who could write code by labeling them STE and SDET respectively. (This was a horrible abuse of job titles in my opinion.) The new SDET competencies at Microsoft are designed, and are supposed to be implemented, in a manner that reinforces the essential skills we expect from our testers, so that a tester at a given career stage in one business unit has essentially equitable skills with any other tester at the same career stage in any group in the company.

But people are often resistant to change, and as I wrote in my last post, some people choose to wallow in self-pity, pretend they are the victim of some evil plot, hypercriticize change with dogmatic arrogance, and incessantly bemoan dubiously negative aspects of change from an often overly emotional, narrow-minded perspective. A person who moved from a testing role to program management stated, "I was a tester because I understand how users think and how they use products and I wanted to use that knowledge to make our software better." Really? We make software better by beating quality into it? Does this demonstrate a good understanding of software processes and sound business logic? I only ask these questions because it is well known that it is much cheaper to prevent defects, and that many defects can be found in the design process. So I ask myself: why in the world didn't this person start as a program manager (responsible for interpreting marketing analysis and customer feedback into requirements and product design), or become one before now? What is even more amazing about this statement is that it doesn't acknowledge that, as a program manager, this person is now in a role that should have a direct connection to the customer and a greater impact on making our software better. A development strategy or process that emphasizes customer advocacy primarily in the testing phases is ridiculously immature and a gross waste of resources, since empirical studies have widely shown that it is cheaper to prevent defects through better designs and clear requirements than to find them during a testing cycle.

The same person stated, "I wanted to keep breaking software in the incredibly fun, very effective way I had been doing." (Personally, I find API testing (which can also use a black-box approach) and white-box test design extremely fun and intellectually challenging, and both are also very effective when used appropriately.) Unfortunately, this comment seems to perpetuate the myth that testers make software better by finding bugs, and it also demonstrates an extremely limited view of the role and overall potential value of software testing to an organization. This is indeed a very narrow, antiquated (in technology time), and immature view of software testing that emphasizes testing primarily as a bug-finding endeavor. However, Beizer wrote that black-box testing exercises approximately 35-65% of the product, and Marne Hutcheson and I have empirical data demonstrating that GUI testing (the type of black-box testing most people are familiar with, and the type of testing most non-technical testers are limited to performing) is not as effective as most people want to believe, and it is often more costly compared to using a variety of approaches to software testing. Again, even Kaner notes, "Programmer productivity has grown dramatically over the years, a result of paradigmatic shifts in software development practice. Testing practice has evolved less dramatically and our productivity has grown less spectacularly. This divergence in productivity has profound implications—every year, testers impact less of the product. If we continue on this trajectory, our work will become irrelevant because its impact will be insignificant." (I highly suspect the 'testers' Kaner is referring to in this context are primarily non-technical GUI testers, since that is the type of testing emphasized in his BBST course.)

There is no doubt that a person who does not at least understand programming concepts, or who lacks an in-depth technical understanding of the system under test, is unable to perform various activities that may be required in the role of a professional software tester. That person cannot perform code reviews (which have been proven to find certain classes of defects more effectively than any other type of testing); they cannot analyze code to determine which areas have not been tested, or design tests from a white-box approach to increase testing effectiveness and reduce risk; they cannot debug errors and identify root causes of defects; they cannot automate tests to free up their time or reduce costs during the maintenance phase of the product lifecycle; and they may not be able to adequately analyze and decompose test data. While some companies don't rely on their testers to do this type of work, these are certainly tasks that any professional tester should be able to perform.
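To make the white-box point concrete, here is a minimal sketch (the function and its threshold are entirely hypothetical, not from any product discussed above) of how reading the code lets a tester aim tests at the exact branch boundary, something a tester who cannot read code can only stumble onto by luck:

```python
def discount(total):
    # Hypothetical function under test: 10% off orders of 100 or more.
    if total >= 100:
        return total - total // 10
    return total

# White-box tests target the branch boundary that is visible in the code;
# without reading the code, nothing signals that 100 is the interesting value.
assert discount(99) == 99     # just below the branch: no discount
assert discount(100) == 90    # boundary value takes the discount path
assert discount(150) == 135   # well inside the discount path
```

The same inspection-driven thinking scales up to coverage analysis: untested branches found by reading (or instrumenting) the code become new targeted tests.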

I suppose there are some software companies that are not interested in actually maturing their processes or reducing long-term costs, that have no interest in the intellectual-property value of testing artifacts, or that simply want to continue to rely primarily on GUI testing to get a 'gut feel' for their product before releasing it. However, many large companies that produce software (Microsoft, Cisco, Google, Siemens, etc.) understand the value-add proposition that professional testers provide to the health of the organization, and they specifically hire people into testing roles who have both broad technical skills and the common traits we tend to associate with good testers.

This post is not meant to question the need for non-technical people who have in-depth, current domain or business knowledge of the application space, or who understand the market expectations and customer demands and needs, in the software engineering process. The question I ask is whether the value these individuals bring to the software development process is misplaced, or whether their contribution would be more cost effective and provide greater overall value to the customer if they were in a role (other than testing) that better utilized their knowledge by contributing to defining requirements and designing high-quality software, rather than trying to beat in quality through bug finding.

Comments

  • Anonymous
    January 29, 2008
    I agree that understanding programming is essential for many types of testing. The question is whether all testers need these skills. Is usability testing ever worthwhile? Good design cannot always predict the effect that results, after all. But it is difficult to argue with you, since some of your statements seem to rely so much on supposition of other people's thoughts: "I highly suspect the 'testers' Kaner is referring to..." and "...not as effective as most people want to believe". Is this argument really so binary? Could non-technical testers pair with technical testers? Is it easier to come at a project with a lot of domain knowledge and pick up the technical side as you go, or vice versa? And how much technical knowledge is "enough" to be considered a tester?

  • Anonymous
    February 13, 2008
    Hi David, You're absolutely right. I was not trying to denigrate Kaner or the BBST effort, and I think Cem has done a lot of great work in the past to put the discipline of testing on the radar, so to speak. I was also very glad to see Cem specifically mention that testers in the future will need programming skills. Your concern about the pendulum is well grounded. I have witnessed some teams who 'knee-jerked' and hired mediocre programmers as testers. This is simply foolhardy. I also highly suspect that we will never see a purely automated testing environment in my lifetime. Automation doesn't write itself. Also, there are various types of automation: 80% of the automation we write at MS is below the GUI. If we write GUI automation we tend to rely heavily on abstraction layers, and also on model-based testing approaches. Of course, we often expect that each automated test will run on multiple environments, multiple platforms, and multiple languages as it is distributed to run on various machines throughout the company. But you're right. I would never advocate pure automated testing. Thanks for your comments...spot on!
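    As a rough illustration of the abstraction-layer idea mentioned above (all class and method names here are hypothetical, and the driver is a stand-in so the sketch is self-contained): tests express an intent such as "sign in" once, and the mapping to individual GUI controls lives in one place, so a control change breaks one class instead of hundreds of scripts.

    ```python
    class FakeDriver:
        """Stand-in GUI driver that records actions, so the sketch runs
        without a real UI. A real driver would manipulate actual controls."""
        def __init__(self):
            self.actions = []
        def type(self, field, text):
            self.actions.append(("type", field, text))
        def click(self, control):
            self.actions.append(("click", control))

    class LoginPage:
        """Abstraction layer: tests call sign_in() rather than scripting
        controls directly, so UI changes are absorbed in this one class."""
        def __init__(self, driver):
            self.driver = driver
        def sign_in(self, user, password):
            self.driver.type("username", user)
            self.driver.type("password", password)
            self.driver.click("submit")

    driver = FakeDriver()
    LoginPage(driver).sign_in("alice", "secret")
    ```

    A test then asserts on outcomes (e.g., the recorded or observed actions) instead of on raw control coordinates.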

  • Anonymous
    February 13, 2008
    "Do testers need programming skills?" It is an advantage, but when I compare the test teams I have worked with over the years, the best results have come from teams with diversified skills, including individuals with strong domain knowledge rather than technical knowledge. The preferred ratio between the different skills depends, of course, on the context and nature of the product. One mandatory requirement for each test team member, though, is that he or she should have the mindset of a tester...

  • Anonymous
    February 14, 2008
    Hi stth10, Yes, I agree that each member of a test team should have the mindset of a tester, but I also think that they should have the skills and knowledge to perform the complete spectrum of tasks required of a professional tester. For some reason people seem to assume that because a person codes, they do not have a diversified set of skills including strong 'domain' knowledge. I simply don't understand that logic. I think the key point we need to understand here is that there is no single best approach to software testing; the more tools in our toolbox, the better service we provide to our customers.

  • Anonymous
    February 27, 2008
    This morning I installed Vista SP1 onto my laptop. I was pretty excited about this release because it