

How Professional Testers Think: Why does Microsoft primarily hire testers with a Computer Science, Math, or Engineering background?

The easiest thing to criticize is that which one does not fully comprehend.

There has been a lot of discussion lately about Jerome Groopman's book How Doctors Think and the parallels between doctors in the medical profession and software testers. The book is an excellent read, and it provides valuable insights not only into medicine; we can certainly abstract many of its fundamental lessons to testing software. Indeed, there are many similarities between doctors practicing medicine and software testers testing software. I won't belabor this point since many others have discussed the similarities, and one only has to read the book to discover the obvious.

However, there are several aspects that some people seem to overlook when comparing medical doctors, or others in the medical profession, to software testers. For example, medical doctors are highly educated in biology, chemistry, human anatomy, physiology, etc. In other words, doctors spend a great deal of time being formally educated in subjects directly related to their profession. Compare this with many testers working in the field of software testing. This is not to imply that one must study computer science at a university in order to be a professional tester; however, I know a great number of testers who have worked in the industry for several years and still do not understand basic programming concepts, lack the ability to perform an in-depth analysis of test data, and are incapable of adequately decomposing or modeling features below the user interface. In other words, testers who lack basic knowledge of computer systems and software are utterly blind to the internal workings of the system on which they are working.

Doctors also spend a great deal of time after graduation reading medical journals and attending medical conferences and workshops to continue their education and build their knowledge and skills. Dorothy Graham, Marnie Hutcheson, Steve McConnell, and I have collected a lot of empirical evidence suggesting that very few testers have ever received any formal training in software testing methods, and that less than 1% of testers polled have read more than one book on software testing. Lee Copeland gave an excellent conference talk entitled "The Nine Forgettings" in which he asks, "How can we call ourselves professional testers if we are ignorant of the fundamental techniques of our craft?"

Doctors also realize there are practical techniques (systematic procedures) commonly used in their profession that are extremely useful in the right situation and valuable for diagnosing an ailment or identifying contraindications, and competent doctors know when and how to apply those techniques. Conversely, I often hear testers who, I suspect, have only a superficial understanding of the overall system, or who fail to perform an adequate in-depth investigation of the system on which they are working, refer to testing techniques as folklore. Who needs techniques...if we pound on the keyboard or touch screen long enough we are surely bound to find a bug! (The only folklore I've run across in the industry is the belief that only a handful of people really understand, or are even capable of understanding, exploratory testing.)

Medical science also has a common body of knowledge and professional jargon that is universally accepted throughout the trade. When doctors refer to exploratory surgery there is a common understanding of what is being discussed. Doctors don't pointlessly argue that one person's practice of exploratory surgery is different from another person's approach and therefore it is not really the 'exploratory' surgery that "I" preach. Reputable doctors also don't needlessly make up words to make themselves sound like they offer something new or revolutionary. When they discuss medical conditions they discuss them within rational contexts. For example, when someone complains about numbness in the fingers they may perform tests to check for adequate blood flow to the hand, or they may ask the patient about neck injuries or pain in the neck or upper shoulder region to see if the problem might be caused by a pinched nerve. I don't suspect that many doctors would ask the patient what they had for breakfast, or whether they recently stubbed a toe, to diagnose numbness in the fingers (but I am not a medical doctor, so that is just a guess on my part). Basically, doctors understand that the head bone is connected to the neck bone and don't waste a lot of time hypothesizing, "What if the head bone were connected to the knee bone?"

Finally, a really big difference between doctors and many testers is in how they fundamentally approach their job. Doctors are constantly striving to help people stay healthy. In other words, they are trying to prevent the spread of illnesses, or searching for ways to keep people from becoming sick in the first place. Of course, this means that doctors must have an in-depth understanding of the 'system,' the types of 'bugs' a system might be exposed to, and how those 'bugs' act on the 'system' or how the 'system' reacts to those foreign agents.

As Groopman discussed in his book, "There are primary care physicians in every hospital who speak with great sensitivity and concern, and their longtime patients love them, but clinically they are incompetent..." Likewise, I have met many people, some at MS and some in the industry, who still think software testing is simply about finding bugs, and that additional skills or knowledge of the profession are unnecessary if one is a good bug finder. I often listen with amusement to talks, or read articles, by people who profess that testing is about providing information, but the only 'information' they seem to provide is how to expose yet another obscure bug, or they denigrate practices by applying techniques incorrectly and out of context. For example, they attempt to apply pairwise analysis to a function requiring sequential inputs, then claim pairwise analysis is not a best practice because it is not good at finding bugs, while failing to mention where and when the technique is best applied and the other types of information it provides, such as increased code coverage. As I said in the opening, it is easy to criticize those things we least understand (unless of course we are purposefully obfuscating facts for personal motives). And the uselessness of any critique is only exacerbated by irrational arguments, illogical alternatives, or personal attacks.
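To make that contrast concrete: pairwise (all-pairs) analysis applies to independent input parameters and produces a small suite in which every pair of parameter values appears together in at least one test case, which is why it can raise coverage with far fewer tests than exhaustive combination. The sketch below is a minimal, greedy illustration in Python; the print-dialog parameters and their values are hypothetical, and the greedy construction is just one simple way to build such a suite, not a reference for any particular tool.

```python
from itertools import combinations, product


def pairwise_suite(parameters):
    """Greedily build a small test suite in which every pair of parameter
    values appears together in at least one test case (all-pairs coverage)."""
    names = list(parameters)
    # Every (parameter, value) pair combination that must be covered.
    uncovered = {
        frozenset([(a, va), (b, vb)])
        for a, b in combinations(names, 2)
        for va in parameters[a]
        for vb in parameters[b]
    }
    suite = []
    while uncovered:
        # Choose the full combination that covers the most still-uncovered pairs.
        best = max(
            product(*(parameters[n] for n in names)),
            key=lambda combo: sum(
                frozenset([(names[i], combo[i]), (names[j], combo[j])]) in uncovered
                for i, j in combinations(range(len(names)), 2)
            ),
        )
        suite.append(dict(zip(names, best)))
        uncovered -= {
            frozenset([(names[i], best[i]), (names[j], best[j])])
            for i, j in combinations(range(len(names)), 2)
        }
    return suite


if __name__ == "__main__":
    # Hypothetical print-dialog settings: 3 x 3 x 2 = 18 exhaustive combinations.
    params = {
        "paper": ["A4", "Letter", "Legal"],
        "color": ["Color", "Grayscale", "Draft"],
        "duplex": ["On", "Off"],
    }
    cases = pairwise_suite(params)
    print(f"{len(cases)} pairwise cases instead of 18 exhaustive combinations:")
    for case in cases:
        print(case)
```

Run against these hypothetical parameters, the suite covers every pair of values in roughly half the exhaustive combinations; a function whose inputs must arrive in a particular sequence simply isn't a candidate for this kind of analysis, which is the point the critics leave out.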

So, how does all this relate to Microsoft's hiring practices, and why do we primarily hire people with computer science backgrounds?

Easy: graduates coming from university with a computer science background have a strong understanding of the 'system' and the 'system internals,' much like a doctor understands physiology and human anatomy. This doesn't mean that we only hire testers with a computer science degree. I, and many other testers at Microsoft, do not have a computer science degree; however, we have also not stagnated or simply rested on our ability to find bugs. We have constantly strived to improve our technical skills and our overall understanding of computer systems, software testing practices, and testing knowledge. Interestingly enough, Microsoft career job guidelines have always required testers to be able to automate tests, debug production code, design tests from both a white box and a black box perspective, etc. The difference now is that those skill requirements are applied universally across the company.

Also, much of our internal training can now be based on a common baseline of expectations. For example, when we teach testers how to design tests from a white box perspective, or when I talk about the best practices of code reviews, I expect that the testers already know how to read code. Our automation courses require that testers already know how to write code using procedural and object-oriented programming approaches. Our training simply exposes testers to the namespaces, classes, and methods commonly used in test automation (not in application development), and mentors them on how to design tests that will provide value to the organization from an automated testing perspective. (Also, I can attest that our testers are very good critical thinkers and not simply mindless droids who bang on keyboards or pump out simple rote, prescriptive scripts (similar to what is described here) and label them automated tests.)
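For readers outside Microsoft, the kind of designed, automated check we mean is easier to show than to describe. Here is a minimal sketch using Python's standard unittest module purely for illustration; the Cart class is a hypothetical stand-in for a feature under test, and nothing here represents the specific frameworks or languages used internally.

```python
import unittest


class Cart:
    """Hypothetical feature under test: a minimal shopping cart."""

    def __init__(self):
        self._items = {}

    def add(self, sku, quantity=1):
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self._items[sku] = self._items.get(sku, 0) + quantity

    def total_quantity(self):
        return sum(self._items.values())


class CartTests(unittest.TestCase):
    """Each test is designed against a specific behavior:
    a valid path, an accumulation rule, and an error path."""

    def test_single_add_is_counted(self):
        cart = Cart()
        cart.add("SKU-1")
        self.assertEqual(cart.total_quantity(), 1)

    def test_repeated_adds_accumulate(self):
        cart = Cart()
        cart.add("SKU-1", 2)
        cart.add("SKU-1", 3)
        self.assertEqual(cart.total_quantity(), 5)

    def test_non_positive_quantity_is_rejected(self):
        cart = Cart()
        with self.assertRaises(ValueError):
            cart.add("SKU-1", 0)


if __name__ == "__main__":
    unittest.main()
```

The value is not in the syntax; it is that each check encodes a deliberate decision about which behavior matters, which is exactly what separates designed automation from recorded keystrokes or rote scripts.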

Finally, one of the things we reinforce in our training and throughout a person's career growth at Microsoft is driving quality upstream and defect prevention. We have senior testers who are mentoring developers in how to design and write better unit tests. Our testers are engaged in design and code reviews. Many teams now rely on testers to debug problems down to the line of code in order to identify patterns of problems. We have testers who are doing root cause analysis of specific classes or types of defects and building tools or enabling processes to identify those types of defects earlier in the project or development cycle. And yes, our testers still design and execute a lot of manual tests. They use the product in development every day by dogfooding it. And most testers even participate in newsgroups and sit with or listen in on product support calls to hear indirectly from end users.

So, much as you probably seek out the best possible medical doctor as a primary care provider or when struck with an illness, at Microsoft we want highly competent individuals in our software testing roles: people who understand that testing is a huge challenge requiring a great breadth of skills and knowledge to test software from various perspectives and approaches, and who possess a great deal of in-depth knowledge of the system they are working on. We seek professionals who realize that constantly bemoaning the inadequacies or drawbacks of an approach or perspective solves nothing, and who instead actively engage in finding great, scalable solutions to problems in order to reduce testing costs and help drive overall quality upstream.

In a dynamic industry that changes rapidly, we as professional testers must constantly reevaluate our skills and knowledge to stay abreast of changes in technology and to learn more about our chosen profession. We must be vigilant and strive unceasingly to improve our skills and our knowledge of computers, computer software, and our profession in order to remain competitive with our peers and to remain valuable team members to our organizations and employers.

Comments

  • Anonymous
    December 30, 2007
    I deliver training to MS recruiters in Bldg 19 and this topic came up in my last class. I posted a follow-up blog titled "The Eight Habits of Highly Effective Software Testers" at http://jamesmccaffrey.spaces.live.com/blog/cns!504C7CC53E7E7FE8!477.entry which generally echoes your thoughts.

  • Anonymous
    March 04, 2011
    The comment has been removed

  • Anonymous
    March 04, 2011
    Hi Curtis, I am not sure what assertion you are challenging. The main point of my message is:

    • Doctors study a wide range of subjects in order to better understand the "systems" in their human patients.
    • IMHO, testers should also have a greater understanding of the "systems" in their software domain beyond simply being a power user. Of course, great doctors also have a great bedside manner. Great testers also have a knack for customer empathy. But customer empathy and being an expert in your professional field are not mutually exclusive traits. Honestly, I don't watch much television, so I can't comment on your assessment of a fictional doctor and his staff compared with the realities of a globally successful software company with a growing list of satisfied customers.