Equivalence class partitioning: is it real or is it a figment of our imagination?

Last week I attended the Software Testing and Performance conference in Boston. I presented a workshop on Systematic Testing Techniques, as well as a talk on random test data generation and combinatorial analysis. One way I continue to learn about our profession and increase my own knowledge is by going to conferences to hear different points of view from practitioners around the world. So, I also attended several talks during the conference, but one talk was particularly entertaining (and I don't mean that in a good way).

When I listen to other testers, sometimes I hear something that is new to me and I want to learn more about it. Sometimes I hear something profound that makes me think, contemplate alternatives, or reflect more deeply on my own perspectives. Sometimes I hear something revolutionary that causes me to reevaluate my position. And sometimes I hear something so irrational I almost barf up a lung!

In this case the speaker opened his talk with an attack on a quote from the ISTQB Foundation syllabus used to describe boundary testing, which states, "Behavior at the edge of each equivalence partition is more likely to be incorrect..." Now, I know the speaker a bit, and I know he disdains the ISTQB and other certification organizations, but what surprised me was his initial rebuttal, in which he emphatically stated, "equivalence class partitions are figments of our imaginations!"

These days I usually just try to shake off wild and baseless comments as bombastic bloviations used to generate controversy. But in this case, what caught my attention was when the speaker later said that he and another well-known person defined boundaries as "a dividing point between two otherwise contiguous regions of behavior; or a principle or mechanism by which things are classified into different sets." What!? I couldn't believe what I heard, so I had to stop reading email and look up at the presentation. As I visually processed the words I thought my head was going to explode from the seemingly obvious contradiction.

Now, I am not a linguistics expert, but I am pretty sure that "otherwise contiguous regions of behavior" and "classifying things into different sets" are just overly simplistic ways of describing equivalence class partitions. But, I could be wrong. So, I began thinking that since most people start learning about sets in elementary school, they probably understand that the foundation of equivalence class partitioning is set theory, which basically states that "a set is an aggregate, class, or collection of objects," and that the collection of objects, or 'classification of things,' into different sets is based on an equivalence relation between the elements in each set. The application of equivalence class partitions in our profession is elegantly explained by Lee Copeland in his excellent book A Practitioner's Guide to Software Test Design: "An equivalence class consists of a set of data that is treated the same by the module or that should produce the same result." Equivalence class partitioning is also discussed in depth in books by noted experts in the industry such as Beizer, Binder, Myers, Jorgensen, Perry, and Marick, just to name a few.
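
To make this concrete, here is a minimal sketch of the technique. The routine, its valid range, and the test values below are purely hypothetical, chosen only for illustration: each equivalence class contributes one representative value, and the boundary values sit at the edges of the partitions, which is exactly where the ISTQB quote claims defects are more likely to hide.

```python
# Hypothetical example: a routine that accepts ages in the closed range [18, 65].
# The input domain splits into three equivalence classes:
#   invalid-low  : age < 18         -> rejected (all such values treated the same)
#   valid        : 18 <= age <= 65  -> accepted (all such values treated the same)
#   invalid-high : age > 65         -> rejected (all such values treated the same)

def is_eligible(age: int) -> bool:
    """Toy module under test."""
    return 18 <= age <= 65

# One representative value per equivalence class...
representatives = {
    "invalid-low": 10,
    "valid": 40,
    "invalid-high": 80,
}

# ...plus the values at the edges of each partition, where boundary testing looks for defects.
boundaries = [17, 18, 65, 66]

def test_equivalence_classes():
    assert not is_eligible(representatives["invalid-low"])
    assert is_eligible(representatives["valid"])
    assert not is_eligible(representatives["invalid-high"])

def test_boundaries():
    assert [is_eligible(a) for a in boundaries] == [False, True, True, False]
```

By definition, any other value inside a class should exercise the same behavior as its representative, so piling on more values from the same class adds cost without adding much information.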

In fact, the concept of sets and equivalence almost seems instinctive in most humans and is generally expressed at a young age. I remember my daughter, at age 2 or so, separating beads by color into "different sets" on the carpet: the red beads in one group, blue in another, and so on. She was diligent about making sure the different sets of beads did not touch as she put them into the appropriate piles. If a pile of beads got too close to another pile she would run the edge of her hand between the "contiguous regions" to clearly delineate the "dividing point." When I asked her to get me a red bead, she would randomly grab one from the pile, because all the red beads were...red, and there were no significant differences among the red beads (elements) in the set she created that were relevant in the context of that game.
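
The same idea can be written down directly in terms of the underlying set theory: an equivalence relation (here, "has the same color") partitions a set into disjoint classes, and any element of a class is interchangeable with any other for the purpose at hand. A tiny sketch, with made-up bead data:

```python
import random
from collections import defaultdict

# Hypothetical bead data; the equivalence relation is "has the same color".
beads = [("b1", "red"), ("b2", "blue"), ("b3", "red"), ("b4", "green"), ("b5", "blue")]

# Partition the set: every bead lands in exactly one class, and the classes do not overlap.
classes = defaultdict(list)
for bead_id, color in beads:
    classes[color].append(bead_id)

print(dict(classes))  # {'red': ['b1', 'b3'], 'blue': ['b2', 'b5'], 'green': ['b4']}

# "Get me a red bead": any element of the 'red' class will do.
print(random.choice(classes["red"]))
```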

Perhaps the majority of the industry's experts are wrong, and I wasted my time reading books on software testing principles and practices, because this person is right and equivalence class partitioning really is only a figment of our imagination.

On the other hand, although I have certainly never claimed to be an expert, I am still pretty darn sure that the underlying foundation of computers and computer software is influenced by mathematical principles, and that as a tester I can use those same principles to help me design effective tests: tests that help me evaluate discrete functional capabilities and attributes of software components and expose certain categories or patterns of errors more efficiently.

But why should we get mired down and confused with facts (especially all that boring math stuff) when it is much easier to appeal to some people's emotions? So, forget everything you just read...and if anyone asks why testing is so hard, just tell them testing is an art with no practical foundation in logic, because software is...well, it's just magic!

Comments

  • Anonymous
    September 29, 2008
    The comment has been removed

  • Anonymous
    September 30, 2008
    The comment has been removed

  • Anonymous
    September 30, 2008
    Some functions really do partition sets into equivalence classes. Those equivalence classes are real. Some relations yield a bunch of subsets of a set but don't always yield partitions. When someone falsely asserts that the resulting subsets are equivalence classes, those equivalence classes are figments of the asserter's imagination. Sometimes there exist equivalence classes which would be useful in constructing tests, but the person makes mistakes in determining what the equivalence classes are. But the assertion that there are no equivalence classes at all is just as wrong. It sounds like the speaker had used Windows 95. Ignore him. Use other versions of Windows and you can partition into equivalence classes.
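
To illustrate the distinction drawn in this comment with a toy sketch (the sets and relations are made up): a genuine equivalence relation such as "same remainder mod 3" is reflexive, symmetric, and transitive, so it yields a true partition, while a relation that is not transitive, such as "within 1 of each other", yields overlapping subsets that are not equivalence classes at all.

```python
values = list(range(10))

# "Same remainder mod 3" is an equivalence relation, so grouping by it partitions the set.
partition = {}
for v in values:
    partition.setdefault(v % 3, []).append(v)
print(partition)  # {0: [0, 3, 6, 9], 1: [1, 4, 7], 2: [2, 5, 8]} -- disjoint classes covering the set

# "Within 1 of each other" is reflexive and symmetric but not transitive
# (0 ~ 1 and 1 ~ 2, yet 0 is not related to 2), so grouping by it does not partition the set.
near = {v: [w for w in values if abs(v - w) <= 1] for v in values}
print(near[1])  # [0, 1, 2] -- these subsets overlap, so they are not equivalence classes
```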

  • Anonymous
    September 30, 2008
    >>> If you rephrased it as "equivalence classes are models" then I would agree.

    OK, not much different from what I said. If I break a model into several fragments, what would each of these fragments be called - models? Models or fragments of models - not much difference.

    >>> If a tester's only view of software is through the user interface and they do not understand the fundamentals of programming concepts then their ability to form accurate equivalence classes of data sets is going to be severely limited.

    You keep saying this "view through GUI...". What made you believe that is how testers view software? You might have seen some "bad" examples. I have seen many good testers who can see all the way down to the OS while formulating equivalence classes. At least I do not approach modeling/viewing the software like that.

    >>> The biggest weakness of ECP is the pathetic attempts by some people who lack a strong understanding of domain they are trying to apply it to, and then claim it to be inadequate.

    I don't agree. ECP's fundamental treatment is mathematical. How can a mathematical model represent real-world usage of software and say "this set of input values will be treated in the same way"? No matter what domain knowledge you bring in, at the end of the day ECP gives you a set of values and stamps them as "equivalent". No big deal at all.

    >>> Your example is again without context, so your classification "A" and "B" as equivalent is meaningless.

    Thank you for saying this. That is how I have seen ECP being applied; that is the state of practice. You need to look beyond the example. How can one say for sure that two values of a "well" defined equivalence class will be processed by the application (and platform) in the same way? You did not comment on this.

    >>> Also, I happen to think reading books about my chosen profession is important. Just as doctors continue to read books and journals about medicine I think it is important for software testers to really understand the discipline beyond the ability to find bugs through the user interface. In fact, by reading these books I began rigorously questioning my own thoughts and perspectives as well as those of others in the industry. The difference is that my questions were specific, topical and direct within the context of software testing and software engineering. Of course, this is predicated on the fact that people went to school to learn how to think for themselves before entering the workforce (which may not be true in all cases.) In other words, reading is easy...comprehension of what is read is a completely different matter.

    Totally agree.

    >>> Finally, as I said before, I tend to choose my words very carefully. There is no need to rephrase, reframe, or guess at what I mean.

    For me to understand what you are saying, I tend to rephrase. At times I feel that rephrasing is necessary so that I can bring your thinking to my level and comment on it. IMHO, you should allow rephrasing so that people can interpret your words in their own vocabulary. If they misinterpret, you can correct them and state your viewpoint.

    >>> The discipline of software testing is not unique in the requirement of its practitioners to think critically, apply their cognitive skills, or to ask rational questions to help us learn within appropriate contexts. These are traits common among many professionals.

    So what are we doing about it? Go and study only computer science, OS, programming, networking, and possibly some math? By saying that every other field requires cognitive and thinking skills, we are actually killing the importance of it in our testing craft. We can learn a great deal about thinking and analysis by studying "general systems theory", "philosophy", and many other disciplines that help improve our thinking.

    Shrini

  • Anonymous
    September 30, 2008
    The comment has been removed

  • Anonymous
    October 01, 2008
    The comment has been removed

  • Anonymous
    October 01, 2008
    Sorry for picking on a couple of tangents this time.

    "This is an overly simplistic view, but I think you can understand that the underlying foundation of computers, computer science, and software programming is..., and I am just guessing here, I suspect mathematics?"

    Computer scientists start out thinking so, but real computer scientists learn better. Study enough mathematics and you learn that it all depends on proof theory. Proof theory depends on methods of proof. Methods of proof essentially depend on Turing machines. So the underlying foundation of all of mathematics is computer science.

    Now, people make mistakes in things that they think are proofs, and often they get caught. Computers can help check proofs, but sometimes people make mistakes in the programs that they thought were helping check proofs, so sometimes they don't really help. So, practically speaking, there isn't much benefit from the observation that all of mathematics depends on computer science. Human work is still needed to fix bugs caused by human frailty.

    "(I don't get paid to guess either.)"

    You don't? How do you get your work done then? When designing a program sometimes I have to guess what a user needs. On lucky occasions I can ask, and then the users themselves get paid to guess ^_^ but otherwise I'm the one guessing. When debugging, I get paid to guess all the time. While reading code to see what it does, sometimes I make guesses about what parts of it look suspicious, and sometimes make further guesses about which parts are likely to influence some particular bug. Those kinds of guesses are maybe around 70% wrong and 90% wrong, respectively. Sometimes I set breakpoints in the code. Maybe 98% of the breakpoints that I set end up being in places that don't have bugs.

    "there will always be a probability of error because we know that we can't test everything!"

    Exactly true. This is why it must be permitted to add tests when bugs are found, even when bugs are found by people other than testers.

  • Anonymous
    October 01, 2008
    Hi Norman,

    Interesting perspective, but I will emphatically disagree. Turing machines simply model the logic of a computer algorithm, and there are also many academic studies that refute the Turing thesis myth. In fact, Alan Turing was a mathematician and much of his work influenced the study of computer science. But the purpose of my statement was an attempt to explain the relationship of mathematics to computers and software programs. Perhaps I should have left out "computer science," since the relationship between the study of computer science and software engineering may be somewhat debatable.

    My connotative usage of the word 'guessing' implies "forming an opinion without sufficient evidence," or "to risk an opinion regarding something one does not know about, or, wholly or partly by chance." So, I really don't suspect that when you set breakpoints in your code you do it "wholly or partly by chance." I suspect that you are deducing or inferring, in which case you are making a decision based on cognition, evidence, or logical reasoning. Sometimes our decision is wrong and we learn from those mistakes. (We can also learn from guessing...that is often referred to as the school of hard knocks.) So, I really don't suspect that you just 'guess' at where to set a breakpoint...or maybe you do...personally, I prefer to use logic and reasoning.