How I view OneNote automation
I had an interesting discussion last week with a tester about the purpose of automation for OneNote. She was acting on the assumption that the purpose of automation was to accurately mimic a OneNote user's behavior, and was struggling with a technical detail (how to verify OneNote was properly accessing a file).
The viewpoint I hold is that automation is a tool whose purpose is to tell me information about the product. That's it. It's not rocket science and not hard to describe. Where confusion can creep in is when we talk about different types of automation and think the goal of that particular type of automation is the only goal.
An example here is performance testing. I can create a test that types 5000 words on a page and runs spell check. This test can measure the time spell check takes to run, memory usage, file I/O, etc… and if properly constructed, can give valuable information about the performance of OneNote. If I wanted to make this a stress test, I could intentionally misspell all 5000 words. If I wanted to mimic user behavior, I could gather data about the typical frequency of misspelled words and match my text to that data. But I don't have to do this - I can focus instead on the edge case of how long OneNote takes to correct the spelling of 5000 words. This can let the team make decisions about where we need to focus efforts on performance for the product.
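As an illustration, here is a minimal sketch of what the timing side of such a test could look like. The drive_type_text and drive_run_spell_check helpers are hypothetical stand-ins for whatever UI automation layer actually drives OneNote (they are not a real OneNote API); only the wall-clock measurement is concrete.

```python
"""Minimal sketch of timing a spell-check pass over a 5000-word page.
The drive_* helpers are hypothetical placeholders for the real UI
automation layer; only the measurement harness is shown concretely."""
import time


def drive_type_text(words):
    """Hypothetical: send the word list to the OneNote page via UI automation."""
    pass


def drive_run_spell_check():
    """Hypothetical: invoke spell check and block until it finishes."""
    pass


def measure_spell_check(words):
    drive_type_text(words)

    start = time.perf_counter()
    drive_run_spell_check()
    elapsed = time.perf_counter() - start

    # Memory usage and file I/O would come from OS performance counters
    # sampled around the same window; only wall-clock time is shown here.
    return elapsed


if __name__ == "__main__":
    corpus = ["wrod"] * 5000  # the stress case: every word misspelled
    print(f"Spell check took {measure_spell_check(corpus):.2f}s")
```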
So the overall goal is to give me data about the performance of spell checking. I have two methods I can use to get that data - the stress case of 5000 misspelled words in a row, or mimicking human behavior and matching the expected percentage of misspelled words. (Since this is all going to run in a lab, I can do both, but bear with me.) Each case has a different methodology - mimic human behavior or not - and in this case both get me to my goal of learning about the product.
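Here is a sketch of the two ways the 5000-word corpus could be built to feed a test like the one above. The 2% misspelling rate below is a made-up placeholder, not real user data - the real number would come from the gathered frequency data mentioned earlier.

```python
"""Sketch of building the two corpora: the stress case (every word wrong)
and the user-mimicking case (misspellings at an assumed rate)."""
import random


def stress_corpus(n=5000):
    # Every word is misspelled, so spell check has to correct all of them.
    return ["teh"] * n


def realistic_corpus(n=5000, misspell_rate=0.02, seed=0):
    # Mix correct and misspelled words at the assumed rate (placeholder value).
    rng = random.Random(seed)
    good, bad = "the", "teh"
    return [bad if rng.random() < misspell_rate else good for _ in range(n)]


if __name__ == "__main__":
    mixed = realistic_corpus()
    print(sum(w == "teh" for w in mixed), "misspellings in the realistic corpus")
```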
Back to this specific example of tracking whether a file was accessed properly. A quick definition of properly - the file is accessed only once, only the needed data from it is read, then the file is released. Clearly, users are not going to directly see this behavior, but they can indirectly tell when it goes wrong - a hard drive that "grinds away" would be one sign that the file is being accessed more than the minimum number of times, for instance. Now, to cause the file to be accessed, OneNote needs to sync to the server. There are a few ways to do this. One is to let OneNote sit idle for a short period of time, and sync will kick in; this is the most frequent way OneNote syncs. Alternatively, you can press SHIFT+F9 to force a sync, use the folder properties sync dialog, add a command to the ribbon, etc… So the method she chose was a forced sync - she did not want to pay the time to let the machine sit idle. She wanted the test to run quickly and get the results back. She can add a second test later that verifies the sync happens at idle and focus that test only on that functionality, since she will already have a test which verifies the file is accessed properly once sync happens. No need to duplicate this verification (which can be tricky).
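A sketch of how that verification could be wired up, under some loud assumptions: force_sync stands in for sending SHIFT+F9 through UI automation, and the file-operation trace is a stand-in for whatever monitoring tool the real test would parse - neither is a real OneNote or monitoring API.

```python
"""Sketch of the "accessed properly" check: force a sync, then inspect a
trace of file operations and confirm the notebook file was opened once,
read, and released. force_sync() and the trace format are hypothetical."""


def force_sync():
    """Hypothetical: send SHIFT+F9 to OneNote and wait for sync to complete."""
    pass


def file_accessed_properly(trace, path):
    ops = [op for (target, op) in trace if target == path]
    # Accessed properly: exactly one open, and the handle was released.
    return ops.count("open") == 1 and ops.count("close") == 1


if __name__ == "__main__":
    force_sync()
    # Stand-in trace; a real test would parse this from the monitoring tool's output.
    trace = [("notebook.one", "open"),
             ("notebook.one", "read"),
             ("notebook.one", "close")]
    print("file accessed properly:", file_accessed_properly(trace, "notebook.one"))
```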
I'll close by saying that all the different types of automation - performance, security, functional, unit tests, etc… - serve to give us data about OneNote. The particular methodology of the test is merely a means to that end, and each has its purpose.
Questions, comments, concerns and criticisms always welcome,
John